Sign language is a special form of communication that bridges the gap between deaf and mute people and the hearing world. Each sign language contains many signs that differ in palm size, hand shape, motion, and positioning of the hand, all of which are important components of a sign. Many researchers have proposed applications in this area, and several notable advancements have been made in recent years by applying deep learning principles. In this survey, we evaluate various hand gesture detection applications built on recent deep learning methods. Despite significant advancements in hand gesture detection accuracy, a number of issues still need to be handled; our proposal targets one of them.
Problem Statement: Conversion of sign-language hand gestures into text and audio for deaf and blind people.
I. INTRODUCTION
The most common form of sign communication is based on American Sign Language gestures. The major problem that people with hearing loss face is the communication gap, and hand signals are their way of filling that gap. During communication, ideas can be exchanged through speech, signals, and pictures, and people with hearing loss use a variety of hand gestures to convey their ideas. Communicating means giving information by speaking, writing, or using some other medium.
These people express themselves and offer their opinions through signs. Hand motions carry the intended meaning, and vision is used to detect these signals. Communication through gestures is the nonverbal exchange of information used by deaf and hard-of-hearing people. A hand gesture is a nonverbal method of communication whose semantic content signers can use to convey a great deal of information. There is therefore strong interest in automatic hand gesture recognition, and numerous researchers have worked in this area since the turn of the twenty-first century. The following factors [1] have contributed to the growing importance of automatic hand gesture recognition: (1) the increasing size of the hearing-impaired population, and (2) the spread of vision-based and touchless devices such as video games.
II. LITERATURE SURVEY
1. “Sign Language Recognition: A Deep Survey” by Razieh Rastgoo, Kourosh Kiani, Sergio Escalera
Sign language is a form of communication for the deaf. There are numerous varieties of sign language in the world, and American Sign Language is one of them. The authors present a model that is beneficial to the deaf: using this model, signs are translated into simple sentences.
2. “Hand Gesture Recognition for Sign Language Using 3DCNN” by Muneer Al-Hammadi, Ghulam Muhammad, Wadood Abdul, Mansour Al-Sulaiman
This research paper highlights the communication difficulties experienced by deaf people and analyzes sign language data captured from human hand motions using a 3D convolutional neural network.
3. “Dynamic Sign Language Recognition Based on Video Sequence With BLSTM-3D Residual Networks” by Yanqiu Liao, Pengwen Xiong, Weidong Min, Weiqiong Min, Jiahao Lu
This paper suggests recognizing sign language through 2-D image sampling of video sequences, which will help resolve deaf people's communication problem. For better outcomes, the model is trained on this data and the sampled features are concatenated within the network.
4. “Machine Learning based Hand Sign Recognition” by Ms. Greeshma Pala, Ms. Jagruti Bhagwan Jethwani, Mr. Satish Shivaji Kumbhar, Ms. Shruti Dilip Patil
In today's world, where barriers for the deaf are being removed at an increasing rate, automatic translation systems are very helpful. By resolving this issue, the communication gap between hearing people and deaf people will be eliminated.
5. “Sign Language Recognition Using Neural Network” by Shailesh Bachani, Shubham Dixit, Rohin Chadha, Prof. Avinash Bagul
Sign language is used for communication: it comprises hand gestures that allow a person to communicate without speaking a word. Although it is widely used by deaf people, others shy away from learning it. As a result, communication becomes difficult and, in a way, this leads to the isolation of people with this disability.
6. “Indian Sign Language Based Static Hand Gesture Recognition Using Deep Learning” by S. Gnanapriya, Dr. K. Rahimunnisa, A. Karthika, M. Gokulnath, K. Logeshkumar
Sign language is the main language of deaf and mute people, so it is difficult for a normal person to talk with them, as normal people do not understand their language. A framework for recognizing sign language has therefore been introduced.
7. “A New Benchmark on American Sign Language Recognition using Convolutional Neural Network” by Md. Moklesur Rahman, Md. Shafiqul Islam, Md. Hafizur Rahman, Roberto Sassi, Massimo W. Rivolta, Md Aktaruzzaman
Complicated hand movements, with their constantly changing shapes and positions, present a challenging recognition problem. A CNN (Convolutional Neural Network) is used to address issues like these.
III. PROPOSED SYSTEM APPROACH
In the proposed system, the mute or deaf person submits a gesture or sign image to the system. The system analyses the sign input using a MATLAB image processing technique and classifies it for recognition. When the input image matches the specified dataset, the system plays the corresponding audio through the voice medium; in addition, the output is displayed in text format. This is a working prototype for the conversion of sign language to speech and text.
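As an illustration only, the following is a minimal sketch of this recognition-to-speech pipeline, written in Python rather than MATLAB. The GestureClassifier-style model, the LABELS vocabulary, and the 64x64 input size are assumptions, not the authors' implementation; pyttsx3 is used here simply as a readily available offline text-to-speech engine.

```python
# Hypothetical sketch of the proposed pipeline: sign image in, text + speech out.
import cv2               # image loading and preprocessing
import numpy as np
import pyttsx3           # offline text-to-speech engine

LABELS = ["hello", "thanks", "yes", "no"]   # assumed gesture vocabulary

def preprocess(image_path: str) -> np.ndarray:
    """Load a sign image and normalize it to the classifier's input size."""
    img = cv2.imread(image_path)
    img = cv2.resize(img, (64, 64))          # assumed model input size
    return img.astype(np.float32) / 255.0    # scale pixels to [0, 1]

def recognize_and_speak(image_path: str, model) -> str:
    """Classify the gesture, display the text, and read it aloud."""
    x = preprocess(image_path)[np.newaxis, ...]  # add a batch dimension
    probs = model.predict(x)[0]                  # assumed Keras-style classifier
    text = LABELS[int(np.argmax(probs))]
    print("Recognized sign:", text)              # text output
    engine = pyttsx3.init()                      # voice output
    engine.say(text)
    engine.runAndWait()
    return text
```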
IV. CONVOLUTIONAL NEURAL NETWORK (CNN) ALGORITHM
A Convolutional Neural Network (CNN) accepts speech signals or 2-D structured images as input. Feature extraction is accomplished through convolution, followed by a pooling process that produces a variety of feature maps; a minimal sketch is given after the input/output summary below. One of the most significant advantages of CNNs is that they are simpler to train and have fewer parameters than other networks with a similar number of hidden units.
Input: hand gestures captured through a camera.
Output: the recognized hand gestures assembled into a full sentence; a play feature then reads that sentence out loud.
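To make the convolution-and-pooling idea concrete, here is a minimal Keras sketch of such a classifier. The input size (64x64 RGB) and the number of gesture classes are assumptions for illustration, not values from the paper.

```python
# Minimal CNN sketch: stacked convolution + pooling layers feeding a
# softmax classifier over gesture classes. All sizes are illustrative.
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # assumed: one class per static alphabet sign

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),            # assumed 64x64 RGB frames
    layers.Conv2D(32, 3, activation="relu"),    # convolution extracts local features
    layers.MaxPooling2D(),                      # pooling downsamples the feature maps
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # per-class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Weight sharing in the convolutional layers is what keeps the parameter count low relative to a fully connected network with a similar number of hidden units, which is the training advantage noted above.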
V. YOLO (YOU ONLY LOOK ONCE)
YOLO (You Only Look Once) is an algorithm for object detection. The object detection task consists of determining the locations in the image where certain objects are present, as well as classifying those objects. Previous methods, such as R-CNN and its variations, used a pipeline that performs this task in multiple steps. This can be slow to run and hard to optimize, because each individual component must be trained separately. YOLO does it all with a single neural network.
The input image is divided into an S x S grid of cells. For each object present in the image, one grid cell is said to be “responsible” for predicting it: the cell into which the centre of the object falls. Each grid cell predicts B bounding boxes as well as C class probabilities. A bounding box prediction has five components: (x, y, w, h, confidence). The (x, y) coordinates represent the centre of the box relative to the grid cell location (recall that if the centre of the box does not fall inside a grid cell, that cell is not responsible for it). These coordinates are normalized to fall between 0 and 1. The (w, h) box dimensions are also normalized to [0, 1], relative to the image size.
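To make this box parameterization concrete, the short sketch below (an illustration, not code from the YOLO paper) decodes one cell's normalized (x, y, w, h) prediction back into absolute pixel coordinates:

```python
# Decode a YOLO-style box prediction from one grid cell into pixel space.
# (x, y) are normalized to the cell; (w, h) are normalized to the image.
def decode_box(row, col, x, y, w, h, S, img_w, img_h):
    """row, col: grid cell indices; S: grid size; img_w, img_h: image size."""
    cx = (col + x) / S * img_w   # absolute centre x in pixels
    cy = (row + y) / S * img_h   # absolute centre y in pixels
    bw = w * img_w               # absolute box width
    bh = h * img_h               # absolute box height
    # convert centre/size form to corner coordinates (x1, y1, x2, y2)
    return (cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2)

# Example: a 7x7 grid on a 448x448 image, cell (3, 4), centre offset
# (0.5, 0.5), box covering 20% x 30% of the image.
print(decode_box(3, 4, 0.5, 0.5, 0.2, 0.3, 7, 448, 448))
```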
REFERENCES
[1] Razieh Rastgoo, Kourosh Kiani, Sergio Escalera, “Sign Language Recognition: A Deep Survey” (July 2020)
[2] Muneer Al-Hammadi, Ghulam Muhammad, Wadood Abdul, Mansour Al-Sulaiman, “Hand Gesture Recognition for Sign Language Using 3DCNN” (April 2020)
[3] Yanqiu Liao, Pengwen Xiong, Weidong Min, Weiqiong Min, Jiahao Lu, “Dynamic Sign Language Recognition Based on Video Sequence With BLSTM-3D Residual Networks” (March 2019)
[4] Ms. Greeshma Pala, Ms. Jagruti Bhagwan Jethwani, Mr. Satish Shivaji Kumbhar, Ms. Shruti Dilip Patil, “Machine Learning based Hand Sign Recognition” (June 2021)
[5] Shailesh Bachani, Shubham Dixit, Rohin Chadha, Prof. Avinash Bagul, “Sign Language Recognition Using Neural Network” (April 2020)
[6] S. Gnanapriya, Dr. K. Rahimunnisa, A. Karthika, M. Gokulnath, K. Logeshkumar, “Indian Sign Language Based Static Hand Gesture Recognition Using Deep Learning” (June 2020)
[7] Md. Moklesur Rahman, Md. Shafiqul Islam, Md. Hafizur Rahman, Roberto Sassi, Massimo W. Rivolta, Md Aktaruzzaman, “A New Benchmark on American Sign Language Recognition using Convolutional Neural Network” (May 2020)
[8] Sakshi Sharma, Sukhwinder Singh, “Vision-based Sign Recognition System: A Comprehensive Review” (June 2020)
[9] Deaf. In Cambridge Dictionary (2018). Retrieved from https://dictionary.cambridge.org/dictionary/english/deaf
[10] Joao Carreira, A. Zisserman (2018). Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. Computer Vision and Pattern Recognition.